# 10. (Optional) Challenge: Learning from Pixels

After you have successfully completed the project, if you're looking for an additional challenge, you have come to the right place!

In the project, your agent learned from information such as its velocity, along with ray-based perception of objects around its forward direction. A more challenging task would be to learn directly from pixels!

This environment is almost identical to the project environment; the only difference is that the state is an 84 x 84 RGB image, corresponding to the agent's first-person view of the environment.
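Because the state is now an image rather than a flat feature vector, it usually needs light preprocessing before it can be fed to a convolutional network. The sketch below is a hypothetical example, not the course's reference code: it assumes the environment returns each frame with shape (1, 84, 84, 3) and pixel values already scaled to [0, 1], and it rearranges the frame to the channel-first layout most deep learning frameworks expect. Stacking a few consecutive frames, as the DQN paper does, lets the agent infer motion.

```python
import numpy as np

def preprocess(frame):
    """Convert one (1, 84, 84, 3) RGB frame to channel-first (3, 84, 84)."""
    frame = np.asarray(frame, dtype=np.float32)
    frame = frame.squeeze(0)               # drop the batch axis: (84, 84, 3)
    return np.transpose(frame, (2, 0, 1))  # channels first: (3, 84, 84)

def stack_frames(frames):
    """Stack recent frames along the channel axis so motion is visible."""
    return np.concatenate(frames, axis=0)  # n frames -> (3 * n, 84, 84)
```

For example, stacking the four most recent preprocessed frames yields a (12, 84, 84) array, which can serve as the input to the convolutional Q-network.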

## Download the Unity Environment

To solve this harder task, you'll need to download a new Unity environment.

You need only select the environment that matches your operating system.

Then, place the file in the p1_navigation/ folder in the DRLND GitHub repository, and unzip (or decompress) the file.

(For AWS) If you'd like to train the agent on AWS, you must follow the instructions to set up X Server, and then download the environment for the Linux operating system above.

Please do not submit a project with this new environment. You are required to complete the project with the version of the environment that was provided earlier in this lesson, in The Environment - Explore.

## Explore the Environment

After you have followed the instructions above, open Navigation_Pixels.ipynb (located in the p1_navigation/ folder in the DRLND GitHub repository) and follow the instructions to learn how to use the Python API to control the agent.

## Important Note

To solve this environment, you will need to design a convolutional neural network as the DQN architecture. For inspiration about how to set up this architecture, please refer to the DQN paper.
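As a quick sanity check when sizing the layers, it helps to compute how the spatial dimensions shrink through the convolutional stack. The snippet below uses the layer hyperparameters from the DQN paper (32 filters of 8 x 8 with stride 4, then 64 of 4 x 4 with stride 2, then 64 of 3 x 3 with stride 1) applied to an 84 x 84 input; your own architecture may differ.

```python
def conv_out(size, kernel, stride):
    """Spatial output size of a square, unpadded convolution."""
    return (size - kernel) // stride + 1

size = 84  # input frames are 84 x 84
# (kernel, stride, channels) for the three conv layers in the DQN paper
for kernel, stride, channels in [(8, 4, 32), (4, 2, 64), (3, 1, 64)]:
    size = conv_out(size, kernel, stride)
    print(f"{channels} feature maps of size {size} x {size}")

flat = 64 * size * size  # features flattened into the fully connected layer
print(flat)  # 3136
```

The 3136 flattened features then feed a fully connected layer (512 units in the paper) followed by a final layer with one output per action.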

This task will take much longer to train than the project, and (unless you are very patient :D) you're encouraged to use a GPU. If you don't have a local GPU setup, you can learn how to train the agent on AWS by following the instructions here. Note that it is not possible to train this agent in the provided Udacity Workspace.